    Near-Optimal Set-Multilinear Formula Lower Bounds

    Near Neighbor Search via Efficient Average Distortion Embeddings

    A recent series of papers by Andoni, Naor, Nikolov, Razenshteyn, and Waingarten (STOC 2018, FOCS 2018) has given approximate near neighbour search (NNS) data structures for a wide class of distance metrics, including all norms. In particular, these data structures achieve approximation on the order of p for ℓ_p^d norms with space complexity nearly linear in the dataset size n and polynomial in the dimension d, and query time sub-linear in n and polynomial in d. The main shortcoming is the exponential in d pre-processing time required for their construction. In this paper, we describe a more direct framework for constructing NNS data structures for general norms. More specifically, we show via an algorithmic reduction that an efficient NNS data structure for a metric M is implied by an efficient average distortion embedding of M into ℓ_1 or the Euclidean space. In particular, the resulting data structures require only polynomial pre-processing time, as long as the embedding can be computed in polynomial time. As a concrete instantiation of this framework, we give an NNS data structure for ℓ_p with efficient pre-processing that matches the approximation factor, space and query complexity of the aforementioned data structure of Andoni et al. On the way, we resolve a question of Naor (Analysis and Geometry in Metric Spaces, 2014) and provide an explicit, efficiently computable embedding of ℓ_p, for p ≥ 1, into ℓ_1 with average distortion on the order of p. Furthermore, we also give data structures for Schatten-p spaces with improved space and query complexity, albeit still requiring exponential pre-processing when p ≥ 2. We expect our approach to pave the way for constructing efficient NNS data structures for all norms.
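
    The reduction described above has the shape 'embed the dataset during pre-processing, embed the query at query time, and search in the target space'. The Python sketch below shows only this outer shape; the class and function names are illustrative rather than from the paper, the inner index is a brute-force stand-in for a genuine sub-linear-query Euclidean NNS structure, and the actual reduction is more delicate because it assumes only average (not worst-case) distortion of the embedding.

```python
# A minimal, schematic sketch of the "embed at pre-processing time, embed again at
# query time" shape of the framework. All names are illustrative (not from the paper).
import numpy as np


class LinearScanNNS:
    """Placeholder Euclidean 'NNS' index: exact nearest neighbour by linear scan."""

    def __init__(self, points):
        self.points = np.asarray(points, dtype=float)

    def query(self, q):
        dists = np.linalg.norm(self.points - np.asarray(q, dtype=float), axis=1)
        return int(np.argmin(dists))  # index of the closest embedded point


class EmbedThenSearch:
    """Pre-process by embedding every dataset point once; answer a query by
    embedding it and searching among the images."""

    def __init__(self, points, embed_fn):
        self.embed_fn = embed_fn
        self.index = LinearScanNNS([embed_fn(p) for p in points])

    def query(self, q):
        return self.index.query(self.embed_fn(q))


if __name__ == "__main__":
    # Toy usage with the identity "embedding" on 2-D points.
    data = [[0.0, 0.0], [1.0, 5.0], [3.0, 1.0]]
    nns = EmbedThenSearch(data, embed_fn=lambda x: x)
    print(nns.query([2.9, 1.2]))  # -> 2
```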

    A #SAT Algorithm for Small Constant-Depth Circuits with PTF Gates

    We show that there is a zero-error randomized algorithm that, when given a small constant-depth Boolean circuit C made up of gates that compute constant-degree Polynomial Threshold Functions or PTFs (i.e., Boolean functions that compute signs of constant-degree polynomials), counts the number of satisfying assignments to C in significantly better than brute-force time. Formally, for any constants d and k, there is an epsilon > 0 such that the zero-error randomized algorithm counts the number of satisfying assignments to a given depth-d circuit C made up of k-PTF gates, provided C has size at most n^{1+epsilon}. The algorithm runs in time 2^{n - n^{Omega(epsilon)}}. Before our result, no algorithm beating brute-force search was known for counting the number of satisfying assignments even for a single degree-k PTF (which is a depth-1 circuit of linear size). The main new tool is a learning algorithm of Kane, Lovett, Moran and Zhang (FOCS 2017) for learning degree-1 PTFs (i.e., Linear Threshold Functions) using comparison queries. We show that their ideas fit nicely into a memoization approach that yields the #SAT algorithms.
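
    To make these objects concrete, the sketch below (a hypothetical toy, not the paper's algorithm) spells out a single degree-2 PTF over {-1, 1}^3 and counts its satisfying assignments by the naive 2^n enumeration that the result above is designed to beat.

```python
# A tiny brute-force baseline, only to make the abstract's objects concrete:
# a single constant-degree PTF and the count of its satisfying assignments over
# {-1, 1}^n. The degree-2 polynomial below is an arbitrary illustrative choice.
from itertools import product


def ptf(x):
    """Sign of a degree-2 polynomial: accept iff p(x) > 0."""
    x1, x2, x3 = x
    return x1 * x2 + x2 * x3 - x1 + 0.5 > 0


def count_sat(f, n):
    """Count satisfying assignments of f over {-1, 1}^n by exhaustive enumeration."""
    return sum(1 for x in product((-1, 1), repeat=n) if f(x))


print(count_sat(ptf, 3))  # brute-force #SAT count for the single PTF above
```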

    Improved Low-Depth Set-Multilinear Circuit Lower Bounds

    We prove strengthened lower bounds for constant-depth set-multilinear formulas. More precisely, we show that over any field, there is an explicit polynomial $f$ in VNP defined over $n^2$ variables, and of degree $n$, such that any product-depth $\Delta$ set-multilinear formula computing $f$ has size at least $n^{\Omega(n^{1/\Delta}/\Delta)}$. The hard polynomial $f$ comes from the class of Nisan-Wigderson (NW) design-based polynomials. Our lower bounds improve upon the recent work of Limaye, Srinivasan and Tavenas (STOC 2022), where a lower bound of the form $(\log n)^{\Omega(\Delta n^{1/\Delta})}$ was shown for the size of product-depth $\Delta$ set-multilinear formulas computing the iterated matrix multiplication (IMM) polynomial of the same degree and over the same number of variables as $f$. Moreover, our lower bounds are novel for any $\Delta \geq 2$. For general set-multilinear formulas, a lower bound of the form $n^{\Omega(\log n)}$ was already obtained by Raz (J. ACM 2009) for the more general model of multilinear formulas. The techniques of LST give a different route to set-multilinear formula lower bounds, and allow them to obtain a lower bound of the form $(\log n)^{\Omega(\log n)}$ for the size of general set-multilinear formulas computing the IMM polynomial. Our proof techniques are another variation on those of LST, and enable us to show an improved lower bound (matching that of Raz) of the form $n^{\Omega(\log n)}$, albeit for the same polynomial $f$ in VNP (the NW polynomial). As observed by LST, if the same $n^{\Omega(\log n)}$ size lower bounds for unbounded-depth set-multilinear formulas could be obtained for the IMM polynomial, then using the self-reducibility of IMM and hardness escalation results, this would imply super-polynomial lower bounds for general algebraic formulas.
    Comment: 14 pages. To appear in the Computational Complexity Conference (CCC) 2022.
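
    For orientation, the display below recalls the standard definition of a set-multilinear polynomial with respect to a partition of the variables into sets $X_1, \dots, X_d$ (textbook background, not a claim from the paper): every monomial uses exactly one variable from each set.

```latex
% Background definition (not a statement from the paper): given a partition of the
% variables into sets X_1, ..., X_d, a polynomial is set-multilinear if every
% monomial uses exactly one variable from each set.
\[
  f(X_1,\dots,X_d) \;=\; \sum_{(x_1,\dots,x_d)\,\in\, X_1\times\cdots\times X_d}
    \alpha_{x_1,\dots,x_d}\; x_1 x_2 \cdots x_d,
  \qquad \alpha_{x_1,\dots,x_d}\in\mathbb{F}.
\]
```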